
    A new method to determine multi-angular reflectance factor from lightweight multispectral cameras with sky sensor in a target-less workflow applicable to UAV

    A new physically based method to estimate the hemispherical-directional reflectance factor (HDRF) from lightweight multispectral cameras equipped with a downwelling irradiance sensor is presented. It combines radiometry with photogrammetric computer vision to derive geometrically and radiometrically accurate data purely from the images, without requiring reflectance targets or any other additional information apart from the imagery. The sky sensor orientation is initially computed using photogrammetric computer vision and then revised with a non-linear regression combining radiometric and photogrammetry-derived information. The method works under both clear-sky and overcast conditions. A ground-based test acquisition of a Spectralon target, observed from different viewing directions and under different sun positions with a typical multispectral sensor configuration in clear-sky and overcast conditions, showed that both the overall value and the directionality of the reflectance factor reported in the literature were well retrieved, with an RMSE of 3% for clear sky and up to 5% for overcast sky.
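
    The core quantity here is standard radiometry: the reflectance factor is the ratio of the target's radiance to that of an ideal Lambertian reflector under the same illumination, i.e. HDRF = πL/E. Below is a minimal sketch of that ratio, assuming calibrated radiance imagery and a cosine-corrected irradiance reading from the sky sensor; the function and values are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def hdrf(radiance, downwelling_irradiance):
        """Hemispherical-directional reflectance factor per pixel.

        radiance: at-sensor radiance image for one band (W m^-2 sr^-1 nm^-1).
        downwelling_irradiance: scalar sky-sensor irradiance (W m^-2 nm^-1),
            assumed already cosine-corrected for the sensor's orientation.
        HDRF = pi * L / E, the ratio to an ideal Lambertian reflector.
        """
        return np.pi * radiance / downwelling_irradiance

    # Illustrative values only: a flat 0.05 W m^-2 sr^-1 nm^-1 radiance
    # image under 0.8 W m^-2 nm^-1 irradiance gives HDRF ~ 0.196.
    L = np.full((4, 4), 0.05)
    print(hdrf(L, 0.8))
    ```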

    Making Background Subtraction Robust to Sudden Illumination Changes

    Modern background subtraction techniques can handle gradual illumination changes but are easily confused by rapid ones. We propose a technique that overcomes this limitation by relying on a statistical model, not of the pixel intensities, but of the illumination effects. Because these effects tend to affect whole areas of the image, as opposed to individual pixels, low-dimensional models are appropriate for this purpose and make our method extremely robust to illumination changes, whether slow or fast.
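
    The abstract leaves the model unspecified; one classic way to realize a low-dimensional illumination model is an eigenbackground-style PCA over background frames, where pixels the subspace cannot reconstruct are flagged as foreground. The sketch below follows that assumption and is not necessarily the paper's exact model.

    ```python
    import numpy as np

    def fit_illumination_basis(background_frames, k=3):
        """PCA basis capturing global illumination modes of the background.

        background_frames: (n, h*w) array of flattened grayscale frames.
        Returns the mean frame and the top-k principal components.
        """
        mean = background_frames.mean(axis=0)
        centered = background_frames - mean
        # SVD of the centered stack; rows of vt are principal directions.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean, vt[:k]

    def foreground_mask(frame, mean, basis, thresh=30.0):
        """Reproject a flattened frame onto the illumination subspace;
        pixels the low-dimensional model cannot explain (large residual)
        are likely foreground. thresh assumes 8-bit intensities."""
        coeffs = basis @ (frame - mean)
        reconstruction = mean + basis.T @ coeffs
        return np.abs(frame - reconstruction) > thresh
    ```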

    AUTOMATIC MAPPING FROM ULTRA-LIGHT UAV IMAGERY

    This paper presents an affordable, fully automated and accurate mapping solution based on ultra-light UAV imagery, which is commercialized by Pix4D. We show interesting applications in the field of UAV mapping and analyse the accuracy of the automated processing on several datasets. The accuracy depends strongly on the ground resolution (flying height) of the input imagery. When this is chosen appropriately, the solution can compete with traditional mapping approaches that capture fewer, higher-resolution images from airplanes and rely on highly accurate orientation and positioning sensors on board. Due to the careful integration of recent computer vision techniques, the result is robust, fully automatic, and can deal with inaccurate position and orientation information, which is typically problematic for traditional techniques.
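
    The dependence on flying height is the usual ground-sampling-distance relation: the pixel footprint grows linearly with height and shrinks with focal length. A small sketch with illustrative camera parameters (not a specific sensor):

    ```python
    def ground_sampling_distance(flying_height_m, focal_length_mm, pixel_pitch_um):
        """GSD in metres per pixel: ground footprint of one sensor pixel,
        scaling linearly with height and inversely with focal length."""
        return flying_height_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

    # Illustrative numbers for a small compact camera: at 150 m altitude
    # with a 6 mm lens and 1.5 um pixels, the GSD is about 3.75 cm.
    print(ground_sampling_distance(150.0, 6.0, 1.5))
    ```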

    Dynamic and Scalable Large Scale Image Reconstruction

    Recent approaches to reconstructing city-sized areas from large image collections usually process them all at once and only produce disconnected descriptions of image subsets, which typically correspond to major landmarks. In contrast, we propose a framework that lets us take advantage of the available meta-data to build a single, consistent description from these potentially disconnected descriptions. Furthermore, this description can be incrementally updated and enriched as new images become available. We demonstrate the power of our approach by building large-scale reconstructions using images of Lausanne and Prague.
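
    One plausible building block for fusing disconnected reconstructions via meta-data is aligning them through shared reference points (e.g. geotagged camera positions) with a least-squares similarity transform. The sketch below uses Umeyama's method and is an assumption about the mechanics, not the paper's actual pipeline.

    ```python
    import numpy as np

    def similarity_transform(src, dst):
        """Least-squares similarity transform (scale s, rotation R,
        translation t) mapping src onto dst via Umeyama's method.
        src, dst: (n, 3) corresponding points, e.g. camera centres of one
        partial reconstruction and their geotagged positions."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        src_c, dst_c = src - mu_s, dst - mu_d
        cov = dst_c.T @ src_c / len(src)
        u, d, vt = np.linalg.svd(cov)
        # Reflection guard: force det(R) = +1.
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(u @ vt))])
        R = u @ S @ vt
        s = np.trace(np.diag(d) @ S) / src_c.var(axis=0).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t
    ```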

    Efficient Large Scale Multi-View Stereo for Ultra High Resolution Image Sets

    We present a new approach for large scale multi-view stereo matching, which is designed to operate on ultra high resolution image sets and efficiently compute dense 3D point clouds. We show that, by using a robust descriptor for matching purposes and high resolution images, we can skip the computationally expensive steps other algorithms require. As a result, our method has low memory requirements and low computational complexity while producing 3D point clouds containing virtually no outliers. This makes it exceedingly suitable for large scale reconstruction. The core of our algorithm is the dense matching of image pairs using DAISY descriptors, implemented so as to eliminate redundancies and optimize memory access. We use a variety of challenging data sets to validate our results and compare them against other algorithms.
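
    As a rough illustration of the core idea, dense per-pixel matching with DAISY descriptors on a rectified pair might look as follows; skimage.feature.daisy stands in for the paper's optimized descriptor extraction, and the winner-takes-all scanline search is a simplification of the full pipeline.

    ```python
    import numpy as np
    from skimage.feature import daisy  # dense DAISY descriptor maps

    def disparity_from_daisy(left, right, max_disp=32, step=4):
        """Winner-takes-all disparity for a rectified grayscale pair by
        comparing dense DAISY descriptors along the same scanline."""
        d_left = daisy(left, step=step)    # (rows, cols, dims) grid
        d_right = daisy(right, step=step)
        rows, cols, _ = d_left.shape
        disp = np.zeros((rows, cols), dtype=int)
        for y in range(rows):
            for x in range(cols):
                # Candidate matches lie to the left on the same scanline
                # (assumed camera geometry); max_disp is in grid units.
                lo = max(0, x - max_disp)
                costs = np.linalg.norm(d_right[y, lo:x + 1] - d_left[y, x], axis=1)
                disp[y, x] = x - (lo + int(np.argmin(costs)))
        return disp * step  # descriptor-grid units back to approx. pixels
    ```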

    Wide-baseline Stereo from Multiple Views: a Probabilistic Account

    This paper describes a method for dense depth reconstruction from a small set of wide-baseline images. In a wide-baseline setting, an inherent difficulty that complicates the stereo-correspondence problem is self-occlusion. We must also consider the possibility that pixels in different images which are projections of the same scene point will have different color values, due to non-Lambertian effects or discretization errors. We propose a Bayesian approach to tackle these problems. In this framework, the images are regarded as noisy measurements of an underlying 'true' image-function. The image data is also considered incomplete, in the sense that we do not know which pixels of a particular image are occluded in the other images. We describe an EM-algorithm which iterates between estimating values for all hidden quantities and optimizing the current depth estimates. The algorithm has few free parameters, displays stable convergence behavior, and generates accurate depth estimates. The approach is illustrated on several challenging real-world examples. We also show how the algorithm can generate realistic view interpolations and how it merges the information of all images into a new, synthetic view.
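
    To make the EM structure concrete, here is a toy version for a single scene point: the hidden quantity is whether each view actually sees the point, and the 'true' value is re-estimated from visibility-weighted observations. This is a deliberately simplified sketch of the alternation the abstract describes, not the full depth estimator.

    ```python
    import numpy as np

    def em_true_value(observations, sigma=5.0, outlier_density=1.0 / 256.0, iters=10):
        """Toy EM: observations are intensity samples of one scene point,
        one per view; some views may be occluded (outliers).

        Returns the estimated 'true' intensity and per-view visibility
        posteriors. sigma is the assumed image noise; occluded pixels
        follow a uniform density over 8-bit intensities."""
        mu = np.median(observations)  # robust initial estimate
        for _ in range(iters):
            # E-step: posterior that each view sees the point, weighing a
            # Gaussian noise model against the uniform occlusion model.
            inlier = np.exp(-0.5 * ((observations - mu) / sigma) ** 2) \
                     / (sigma * np.sqrt(2.0 * np.pi))
            vis = 0.5 * inlier / (0.5 * inlier + 0.5 * outlier_density)
            # M-step: re-estimate the true value from weighted views.
            mu = np.sum(vis * observations) / np.sum(vis)
        return mu, vis
    ```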

    Combined Depth and Outlier Estimation in Multi-View Stereo

    In this paper, we present a generative-model-based approach to solving the multi-view stereo problem. The input images are considered to be generated by one of two processes: (i) an inlier process, which generates the pixels that are visible from the reference camera and obey the constant-brightness assumption, and (ii) an outlier process, which generates all other pixels. Depth and visibility are jointly modelled as a hidden Markov Random Field, and the spatial correlations of both are explicitly accounted for. Inference is made tractable by an EM-algorithm, which alternates between estimation of visibility and depth and optimisation of model parameters. We describe and compare two implementations of the E-step of the algorithm, corresponding to the Mean Field and Bethe approximations of the free energy. The approach is validated by experiments on challenging real-world scenes, two of which are contaminated by independently moving objects.
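
    A stripped-down sketch of the two-process model, scoring a set of depth hypotheses per pixel with a Gaussian inlier likelihood against a uniform outlier density; the MRF coupling and the Mean Field/Bethe machinery are omitted, and the array layout is an assumption for illustration.

    ```python
    import numpy as np

    def depth_and_visibility(residuals, sigma=10.0, outlier_density=1.0 / 256.0):
        """residuals: (n_depths, n_views, h, w) photometric differences
        between the reference image and every other view warped at each
        depth hypothesis (layout assumed for this sketch).

        Returns per-pixel best depth index and per-view soft visibility."""
        inlier = np.exp(-0.5 * (residuals / sigma) ** 2) \
                 / (sigma * np.sqrt(2.0 * np.pi))
        # Responsibility of the inlier process, per depth, view and pixel.
        resp = inlier / (inlier + outlier_density)
        # Score each depth by its mixture log-likelihood summed over views.
        score = np.log(0.5 * inlier + 0.5 * outlier_density).sum(axis=1)
        best = np.argmax(score, axis=0)  # (h, w) depth indices
        # Visibility of each view at the winning depth: (n_views, h, w).
        vis = np.take_along_axis(resp, best[None, None], axis=0)[0]
        return best, vis
    ```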

    PHOTOGRAMMETRIC PERFORMANCE OF AN ULTRA LIGHT WEIGHT SWINGLET UAV

    Low-cost mapping using UAV technology is becoming a popular topic. Many systems exist in which a simple camera is deployed to take images, generally georeferenced with a GPS chip and MEMS attitude sensors. The step from using those images as pictorial information to photogrammetric products with geo-reference, such as digital terrain models (DTM) or orthophotos, is not so big. New developments in the field of image correlation allow images to be matched rapidly and accurately, a relative orientation of an image block to be built, a DTM to be extracted, and orthoimages to be produced through a web server. The following paper focuses on the photogrammetric performance of an ultra-light UAV equipped with a compact 12 Mpix camera, combined with online data processing provided by Pix4D. First, the image orientation step, including camera calibration, is studied; the DTM extraction is then compared with results from conventional photogrammetric software, a new-generation pixel-correlation technique, and reference data from high-density laser scanning. Finally, the orthoimage is assessed in terms of visual quality and geometric accuracy.
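
    The comparison against laser-scanning reference data boils down to vertical residual statistics once both surfaces are resampled to a common grid; a minimal sketch, with grid alignment and no-data handling assumed to be done beforehand:

    ```python
    import numpy as np

    def dtm_rmse(dtm, reference, mask=None):
        """Vertical RMSE between an extracted DTM and a reference surface
        (e.g. from high-density laser scanning), both on the same grid;
        mask optionally excludes no-data cells."""
        diff = dtm - reference
        if mask is not None:
            diff = diff[mask]
        return float(np.sqrt(np.mean(diff ** 2)))
    ```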

    LDAHash: Improved matching with smaller descriptors

    SIFT-like local feature descriptors are ubiquitously employed in computer vision applications such as content-based retrieval, video analysis, copy detection, object recognition, photo tourism, and 3D reconstruction. Feature descriptors can be designed to be invariant to certain classes of photometric and geometric transformations, in particular affine and intensity scale transformations. However, the real transformations an image can undergo can only be approximately modeled in this way, so most descriptors are only approximately invariant in practice. Furthermore, descriptors are usually high-dimensional (e.g., SIFT is represented as a 128-dimensional vector); in large-scale retrieval and matching problems, this poses challenges for storing and retrieving descriptor data. We map the descriptor vectors into the Hamming space, in which the Hamming metric is used to compare the resulting representations. This way, we reduce the size of the descriptors by representing them as short binary strings, and we learn descriptor invariance from examples. We show extensive experimental validation demonstrating the advantage of the proposed approach.
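
    The mapping to Hamming space amounts to a learned linear projection followed by per-coordinate thresholds; in the sketch below, random stand-ins replace the parameters LDAHash learns from data, so it shows the mechanics rather than the trained projection.

    ```python
    import numpy as np

    def binarize(descriptors, projection, thresholds):
        """Map real-valued descriptors to binary codes: project to a
        lower-dimensional space, then threshold each coordinate."""
        return (descriptors @ projection.T > thresholds).astype(np.uint8)

    def hamming(a, b):
        """Hamming distance between two binary codes."""
        return int(np.count_nonzero(a != b))

    rng = np.random.default_rng(0)
    sift = rng.random((2, 128))          # two mock 128-D SIFT descriptors
    P = rng.standard_normal((64, 128))   # stand-in for the learned projection
    t = np.zeros(64)                     # stand-in for the learned thresholds
    codes = binarize(sift, P, t)         # 64-bit codes, 16x smaller than SIFT
    print(hamming(codes[0], codes[1]))   # cheap comparison in Hamming space
    ```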